
    A framework for the comparison of different EEG acquisition solutions

    The purpose of this work is to propose a framework for the benchmarking of EEG amplifiers, headsets, and electrodes, providing objective recommendations for a given application. The framework covers the data collection paradigm, the data analysis, and the statistical framework. To illustrate it, data were collected from 12 different devices with up to 6 subjects per device. Two data acquisition protocols were implemented: a resting-state protocol with eyes open (EO) and eyes closed (EC), and an Auditory Evoked Potential (AEP) protocol. The signal-to-noise ratio (SNR) in the alpha band (EO/EC) and the Event-Related Potential (ERP) were extracted as objective quantifications of physiologically meaningful information. Visual representations, univariate statistical analyses, and a multivariate model were then used to improve the interpretability of the results. The objective criteria show that the spectral SNR in alpha does not provide much discrimination between systems, suggesting that acquisition quality might not be of primary importance for spectral and specifically alpha-based applications. On the contrary, the AEP SNR proved much more variable, stressing the importance of the acquisition setting for ERP experiments. The multivariate analysis identified some individuals and some systems as independent, statistically significant contributors to the SNR. This highlights the importance of inter-individual differences in neurophysiological experiments (sample size) and suggests some devices might objectively be superior to others when it comes to ERP recordings. However, this illustration of the proposed benchmarking framework suffers from severe limitations, including a small sample size and sound-card jitter in the auditory stimulations. While these limitations hinder a definitive ranking of the evaluated hardware, we believe the proposed benchmarking framework is a modest yet valuable contribution to the field.
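    The abstract does not spell out how the spectral SNR is computed. A minimal sketch of one plausible definition — the ratio of eyes-closed to eyes-open alpha-band power estimated from Welch periodograms — might look as follows (the function name, the EC/EO ratio definition, and the band limits are assumptions for illustration, not the paper's exact method):

```python
import numpy as np
from scipy.signal import welch

def alpha_snr(eeg_ec, eeg_eo, fs=256.0, band=(8.0, 12.0)):
    """Hypothetical spectral SNR: eyes-closed over eyes-open alpha-band power."""
    def band_power(x):
        # Welch PSD with 2-second segments, then mean power inside the band.
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].mean()
    return band_power(eeg_ec) / band_power(eeg_eo)

# Synthetic check: EC trace = EO noise plus a strong 10 Hz alpha rhythm.
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / 256.0)
eo = rng.normal(size=t.size)
ec = eo + 2.0 * np.sin(2 * np.pi * 10.0 * t)
snr = alpha_snr(ec, eo)  # expected to exceed 1 given the injected rhythm
```

    With a genuine alpha-reactivity effect the ratio rises above 1; identical inputs give exactly 1, which makes the definition easy to sanity-check.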

    Robust Brain-computer interface for virtual Keyboard (RoBIK): project results

    Special issue: ANR TECSAN: Technologies for Health and Autonomy. National audience. A Brain-Computer Interface (BCI) is a technology that translates the brain's electrical activity into a command for a device such as a robotic arm, a wheelchair, or a spelling device. BCIs have long been described as an assistive technology for severely disabled patients because they completely bypass the need for muscular activity. The clinical reality is, however, dramatically different, and most patients who use BCIs today do so as part of constraining clinical trials. To achieve the technological transfer from bench to bedside, BCIs must gain ease of use and robustness in both the measurement (electroencephalography [EEG]) and the interface (signal processing and applications). The Robust Brain-computer Interface for virtual Keyboard (RoBIK) project aimed at the development of a BCI system for communication that could be used on a daily basis by patients without the help of a trained team of researchers. To guide further developments, clinicians first assessed patients' needs. The prototype subsequently developed consisted of a 14 felt-pad electrode EEG headset sampled at 256 Hz by an electronic component capable of transmitting signals wirelessly. The application was a virtual keyboard generating a novel stimulation paradigm to elicit P300 Event-Related Potentials (ERPs) for communication. Raw EEG signals were processed with the OpenViBE open-source software, including novel signal processing and stimulation techniques.

    Robust Virtual Keyboard for Brain-Computer Interface (ROBIK): An Halfway Update on the Project

    International audience. The principle of a Brain-Computer Interface (BCI) is to control a device through the extraction and interpretation of signal features from electroencephalograms (EEG), collected either from the surface of the scalp or through invasive measurements. This long-standing idea for a communication technique (Vidal, 1973) offers the advantage of bypassing the need for muscle activity in the control chain and is therefore presented as a promising alternative to restore communication and control in severely disabled patients (Wolpaw et al., 2002). However, the lack of robustness and ergonomics of both the available software and the EEG measurement techniques has delayed the transfer of this technology to patients' bedsides. The French Research Agency has funded a 3-year project gathering national leaders in microelectronics (CEA-Leti), EEG signal processing (Gipsa-Lab), and the clinical management of severely disabled people (Raymond Poincaré hospital). The aim of the project is the development and clinical validation of a Brain-Computer Interface prototype for communication. As an initial step, a survey was carried out to assess the needs of patients and users (family and caretakers), which were translated into specifications, on the basis of which software and hardware were developed. The survey (n = 45) highlighted the need for easy-to-set-up systems (installation time = 15 min), which stresses the importance of mechanical comfort and customization of the application. The development of signal processing techniques has led to improvements of the P3Speller paradigm. A first prototype of a 32-channel EEG recording system is under development. To ease EEG measurements and reduce installation time, the system has a reduced size. It includes the analog amplification and digital conversion of 32 channels sampled at 1 kHz, as well as wireless data transmission to a computer. First in vivo validations were performed on small animals.
    This system will be optimized and connected to a headset specifically designed to provide a comfortable and handy interface with dry electrodes. The project will run for one and a half more years, ending with a clinical validation in a population of severely disabled patients that will compare the performance of the system with existing assistive technologies. At this stage, the proposed system yields very promising results and outperforms the current state of the art. If the system is shown to perform better than users' current assistive technology, it could reach commercial availability for severely disabled patients within the next 5 years.
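    The P3Speller paradigm mentioned above rests on averaging stimulus-locked EEG epochs so that the repeatable P300 deflection emerges while uncorrelated background activity cancels. A minimal sketch of that averaging step (the function name, array layout, and 0–600 ms window are assumptions, not the project's implementation):

```python
import numpy as np

def average_erp(eeg, events, fs=1000, tmin=0.0, tmax=0.6):
    """Average stimulus-locked epochs from a (channels x samples) EEG array.

    eeg    : 2-D array, channels x samples
    events : sample indices of stimulus onsets
    Returns the mean epoch (channels x window); a time-locked response
    such as the P300 survives averaging while noise shrinks.
    """
    n0, n1 = int(tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, s + n0:s + n1] for s in events])
    return epochs.mean(axis=0)

# Synthetic check: a fixed deflection 100 samples after each event.
eeg = np.zeros((2, 5000))
events = [1000, 2000, 3000]
for s in events:
    eeg[0, s + 100] = 1.0
erp = average_erp(eeg, events)
```

    In a speller, the target row/column is then inferred by comparing such averaged responses across stimulation classes.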

    Clinical and Experimental Factors Influencing the Efficacy of Neurofeedback in ADHD: A Meta-Analysis

    Meta-analyses have been extensively used to evaluate the efficacy of neurofeedback (NFB) treatment for Attention Deficit/Hyperactivity Disorder (ADHD) in children and adolescents. However, each meta-analysis published in the past decade has contradicted the methods and results of the previous one, making it difficult to reach a consensus on the effectiveness of NFB. This work brings continuity to the field by extending and discussing the latest and much-debated meta-analysis by Cortese et al. (1). The extension comprises an update of that work including the latest controlled trials published since and, most importantly, offers a novel methodology. Specifically, the NFB literature is characterized by high technical and methodological heterogeneity, which partly explains the current lack of consensus on the efficacy of NFB. This work takes advantage of that heterogeneity by performing a Systematic Analysis of Biases (SAOB) on the studies included in the previous meta-analysis. Our extended meta-analysis (k = 16 studies) confirmed the previously obtained effect sizes in favor of NFB efficacy as significant when clinical scales of ADHD are rated by parents (non-blind, p-value = 0.0014), but not when they are rated by teachers (probably blind, p-value = 0.27). The effect size is significant according to both raters for the subset of studies meeting the definition of “standard NFB protocols” (parents' p-value = 0.0054; teachers' p-value = 0.043, k = 4). The SAOB performed on k = 33 trials then identified three main factors that have an impact on NFB efficacy: first, a more intensive treatment, but not a longer one, is associated with higher efficacy; second, teachers report a lower improvement than parents; third, using high-quality EEG equipment improves the effectiveness of the NFB treatment.
    The identification of biases relating to an appropriate technical implementation of NFB supports the efficacy of NFB as an intervention. The data presented also suggest that the probably blind assessment of teachers may not be a good proxy for blind assessment, stressing the need for placebo-controlled studies as well as carefully reported neuromarker changes in relation to clinical response.
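    The pooled effect sizes and p-values quoted above come from standard inverse-variance pooling. As a sketch of that computation (fixed-effect model, illustrative numbers rather than the study's data):

```python
from math import erf, sqrt

def pool_fixed(effects, variances):
    """Inverse-variance fixed-effect pooled effect size, 95% CI, and two-sided p."""
    weights = [1.0 / v for v in variances]          # weight = inverse variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = sqrt(1.0 / sum(weights))                   # standard error of the pool
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    z = pooled / se
    # Two-sided p-value from the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return pooled, ci, p

# Illustrative only: two studies with SMDs 0.3 and 0.5, each with variance 0.04.
pooled, ci, p = pool_fixed([0.3, 0.5], [0.04, 0.04])
```

    A random-effects model would additionally estimate between-study heterogeneity before weighting; the fixed-effect version above is the simplest case.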

    Making Big Data Useful for Health Care: A Summary of the Inaugural MIT Critical Data Conference

    With growing concerns that big data will only augment the problem of unreliable research, the Laboratory of Computational Physiology at the Massachusetts Institute of Technology organized the Critical Data Conference in January 2014. Thought leaders from academia, government, and industry across disciplines--including clinical medicine, computer science, public health, informatics, biomedical research, health technology, statistics, and epidemiology--gathered and discussed the pitfalls and challenges of big data in health care. The key message from the conference is that the value of large amounts of data hinges on the ability of researchers to share data, methodologies, and findings in an open setting. If empirical value is to be gained from the analysis of retrospective data, groups must continuously work together on similar problems to create more effective peer review. This will lead to improvements in methodology and quality, with each iteration of analysis resulting in greater reliability.

    Limited usefulness of neurocognitive functioning indices as predictive markers for treatment response to methylphenidate or neurofeedback@home in children and adolescents with ADHD

    Introduction: Earlier studies exploring the value of executive functioning (EF) indices for assessing treatment effectiveness and predicting treatment response in attention-deficit/hyperactivity disorder (ADHD) mainly focused on pharmacological treatment options and revealed rather heterogeneous results. Envisioning the long-term goal of personalized treatment selection and intervention planning, this study comparing methylphenidate treatment (MPH) and a home-based neurofeedback intervention (NF@Home) aimed to expand previous findings by assessing objective as well as subjectively reported EF indices and by analyzing their value as treatment and predictive markers.
    Methods: Children and adolescents (n = 146 in the per-protocol sample) aged 7–13 years with a formal diagnosis of an inattentive or combined presentation of ADHD were examined. We explored the EF performance profile using the Conners Continuous Performance Task (CPT) and the BRIEF self-report questionnaire within our prospective, multicenter, randomized, reference-drug-controlled NEWROFEED study with sites in five European countries (France, Spain, Switzerland, Germany, and Belgium). The clinician-rated ADHD Rating Scale-IV was used as the primary outcome for treatment response. Patients participating in this non-inferiority trial were randomized to either NF@Home (34–40 sessions of TBR or SMR NF depending on the pre-assessed individual alpha peak frequency) or MPH treatment (ratio: 3:2). Within a mixed-effects model framework, analyses of change were calculated to explore the predictive value of neurocognitive indices for ADHD symptom-related treatment response.
    Results: For a variety of neurocognitive indices, we found a significant pre-post change during treatment, mainly in the MPH group. However, the results of the current study reveal a rather limited prognostic value of neurocognitive indices for treatment response to either NF@Home or MPH treatment. Some significant effects emerged for parent ratings only.
    Discussion: Current findings indicate a potential value of self-report (BRIEF global score) and some objectively measured neurocognitive indices (CPT commission errors and hit reaction time variability) as treatment markers (of change) for MPH. However, we found a rather limited prognostic value with regard to predicting treatment response, not (yet) allowing recommendation for clinical use. Baseline symptom severity was revealed as the most relevant predictor, replicating robust findings from previous studies.

    Association of the PHACTR1/EDN1 genetic locus with spontaneous coronary artery dissection

    Background: Spontaneous coronary artery dissection (SCAD) is an increasingly recognized cause of acute coronary syndromes (ACS), afflicting predominantly younger to middle-aged women. Observational studies have reported a high prevalence of extracoronary vascular anomalies, especially fibromuscular dysplasia (FMD), and a low prevalence of coincidental cases of atherosclerosis. PHACTR1/EDN1 is a genetic risk locus for several vascular diseases, including FMD and coronary artery disease, with the putative causal noncoding variant rs9349379 acting as a potential enhancer for the endothelin-1 (EDN1) gene. Objectives: This study sought to test the association between the rs9349379 genotype and SCAD. Methods: Results from case-control studies from France, the United Kingdom, the United States, and Australia were analyzed to test the association with SCAD risk, including age at first event, pregnancy-associated SCAD (P-SCAD), and recurrent SCAD. Results: The previously reported risk allele for FMD (rs9349379-A) was associated with a higher risk of SCAD in all studies. In a meta-analysis of 1,055 SCAD patients and 7,190 controls, the odds ratio (OR) was 1.67 (95% confidence interval [CI]: 1.50 to 1.86) per copy of rs9349379-A. In a subset of 491 SCAD patients, the OR estimate was higher for the association with SCAD in patients without FMD (OR: 1.89; 95% CI: 1.53 to 2.33) than in SCAD cases with FMD (OR: 1.60; 95% CI: 1.28 to 1.99). There was no effect of genotype on age at first event, P-SCAD, or recurrence. Conclusions: This is the largest study conducted to date on this condition, and it identifies the first genetic risk factor for SCAD. This genetic link may contribute to the clinical overlap between SCAD and FMD.
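    The odds ratios and confidence intervals reported above follow the usual log-OR normal approximation (Woolf method). A sketch with hypothetical counts (the table layout and numbers are illustrative, not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table [[a, b], [c, d]].

    a, b: risk-allele carriers among cases / controls (hypothetical layout)
    c, d: non-carriers among cases / controls
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(300, 700, 200, 800)
```

    When the lower bound of the CI stays above 1, the association is nominally significant at the 5% level, which is how the per-allele ORs quoted in the abstract should be read.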

    Applications médicales des BCI pour la communication des patients

    National audience. Communicating is the daily challenge of a great number of people who no longer have the use of speech and whose gestural abilities are very limited: people with brain injuries or suffering from cerebral palsy, multiple sclerosis, amyotrophic lateral sclerosis (ALS), locked-in syndrome (LIS), or intensive-care patients who are conscious but completely paralyzed (Guillain-Barré syndrome, critical-illness neuropathies, etc.). While the family and a human interlocutor often remain the most effective communication aid, the need for relevant assistive communication technologies grows stronger every day. Indeed, people with disabilities have often been pioneers in the use of new technologies to compensate for their impairments and restore family and social ties. Without a doubt, information and communication technologies (ICT) and their applications have the potential to considerably improve the integration, autonomy, and safety of people with disabilities in many aspects of their daily lives. Public research funding for BCIs has thus long benefited from the promise of restoring control and communication to people with disabilities. Unfortunately, more than forty years after the first BCI experiments conducted by Vidal and colleagues, these promises have been only partially kept. However, major advances in supervised learning and in the collection of electroencephalography (EEG) data, together with a sharp drop in the cost of these systems, suggest a favorable outcome for the use of this technology at patients' bedsides.
    We therefore present here the recent work that points in this direction, paying particular attention to the few teams worldwide who take care to bring these systems to performance levels at which they can be used by multidisciplinary teams to equip patients.

    Prediction of mortality in septic patients with hypotension

    Sepsis remains the second largest killer in the Intensive Care Unit (ICU), giving rise to a significant economic burden ($17b per annum in the US, 0.3% of the gross domestic product). The aim of the work described in this thesis is to improve the estimation of severity in this population, with a view to improving the allocation of resources. A cohort of 2,143 adult patients with sepsis and hypotension was identified from the MIMIC-II database (v2.26). The implementation of state-of-the-art models confirms the superiority of the APACHE-IV model (AUC=73.3%) for mortality prediction using ICU admission data. Using the same subset of features, state-of-the-art machine learning techniques (Support Vector Machines and Random Forests) give equivalent results. More recent mortality prediction models are also implemented and offer an improvement in discriminatory power (AUC=76.16%). A shift from expert-driven selection of variables to objective feature selection techniques using all available covariates leads to a major gain in performance (AUC=80.4%). A framework allowing simultaneous feature selection and parameter pruning is developed, using a genetic algorithm, and this offers similar performance. The model derived from the first 24 hours in the ICU is then compared with a “dynamic” model derived over the same time period, and this leads to a significant improvement in performance (AUC=82.7%). The study is then repeated using data surrounding the hypotensive episode in an attempt to capture the physiological response to hypotension and the effects of treatment. A significant increase in performance (AUC=85.3%) is obtained with the static model incorporating data both before and after the hypotensive episode. The equivalent dynamic model does not demonstrate a statistically significant improvement (AUC=85.6%).
    Testing on other ICU populations with sepsis is needed to validate the findings of this thesis, but the results presented here highlight the role that data mining will increasingly play in clinical knowledge generation.
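    The AUC figures used throughout this comparison reduce to the Mann-Whitney statistic: the probability that a randomly chosen positive case (here, a non-survivor) receives a higher risk score than a randomly chosen negative one. A self-contained sketch of that computation (not the thesis code):

```python
def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic.

    Counts the fraction of (positive, negative) score pairs ranked
    correctly, with ties counted as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two positives (0.35, 0.8) vs. two negatives (0.1, 0.4):
# one of the four pairs is misranked, so AUC = 3/4.
example = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

    Differences between models' AUCs, as reported above, are then typically tested for significance with a paired method such as DeLong's test rather than compared directly.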